
    Bayesian Conditioning, the Reflection Principle, and Quantum Decoherence

    The probabilities a Bayesian agent assigns to a set of events typically change with time, for instance when the agent updates them in the light of new data. In this paper we address the question of how an agent's probabilities at different times are constrained by Dutch-book coherence. We review and attempt to clarify the argument that, although an agent is not forced by coherence to use the usual Bayesian conditioning rule to update his probabilities, coherence does require the agent's probabilities to satisfy van Fraassen's [1984] reflection principle (which entails a related constraint pointed out by Goldstein [1983]). We then exhibit the specialized assumption needed to recover Bayesian conditioning from an analogous reflection-style consideration. Bringing the argument to the context of quantum measurement theory, we show that "quantum decoherence" can be understood in purely personalist terms: quantum decoherence (as supposed in a von Neumann chain) is not a physical process at all, but an application of the reflection principle. From this point of view, the decoherence theory of Zeh, Zurek, and others as a story of quantum measurement has the plot turned exactly backward. Comment: 14 pages, written in memory of Itamar Pitowsky
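    Goldstein's [1983] constraint mentioned in this abstract can be illustrated in miniature: an agent's current probability for an event must equal the expectation of the probability she anticipates assigning after updating. The following sketch checks this for a simple two-hypothesis coin model; the model and all numbers are illustrative, not taken from the paper.

    ```python
    # Goldstein's constraint (entailed by reflection): current probability
    # equals the expectation of the agent's anticipated future probability.
    # Illustrative model: a coin has bias 0.3 or 0.7, prior 1/2 each;
    # the event A is "bias = 0.7"; the agent will observe one flip.

    prior = {0.3: 0.5, 0.7: 0.5}

    def posterior_given(flip_heads):
        # Bayesian conditioning on the outcome of a single flip.
        likelihood = {b: (b if flip_heads else 1 - b) for b in prior}
        norm = sum(prior[b] * likelihood[b] for b in prior)
        return {b: prior[b] * likelihood[b] / norm for b in prior}

    # Probability of heads under the prior:
    p_heads = sum(prior[b] * b for b in prior)

    # Expected future credence in A = "bias is 0.7":
    expected_future = (p_heads * posterior_given(True)[0.7]
                       + (1 - p_heads) * posterior_given(False)[0.7])

    print(prior[0.7], expected_future)  # both 0.5
    ```

    The equality is just the law of total expectation; the paper's point is that Dutch-book coherence forces this martingale-like relation between credences at different times even when it does not force conditioning itself.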

    Divergent mathematical treatments in utility theory

    In this paper I study how divergent mathematical treatments affect mathematical modelling, with a special focus on utility theory. In particular I examine recent work on the ranking of information states and the discounting of future utilities, in order to show how, by replacing the standard analytical treatment of the models involved with one based on the framework of Nonstandard Analysis, diametrically opposite results are obtained. In both cases, the choice between the standard and nonstandard treatment amounts to a selection of set-theoretical parameters that cannot be made on purely empirical grounds. The analysis of this phenomenon gives rise to a simple logical account of the relativity of impossibility theorems in economic theory, which concludes the paper.

    Linear Ramsey Numbers


    First-Order and Second-Order Ambiguity Aversion


    The bearable lightness of being

    How are philosophical questions about what kinds of things there are to be understood, and how are they to be answered? This paper defends broadly Fregean answers to these questions. Ontological categories, such as object, property, and relation, are explained in terms of a prior logical categorization of expressions, as singular terms, predicates of varying degree and level, etc. Questions about what kinds of object, property, etc., there are, on this approach, reduce to questions about truth and logical form: for example, the question whether there are numbers is the question whether there are true atomic statements in which expressions function as singular terms which, if they have reference at all, stand for numbers, and the question whether there are properties of a given type is a question about whether there are meaningful predicates of an appropriate degree and level. This approach is defended against the objection that it must be wrong because it makes what there is depend on us or our language. Some problems confronting the Fregean approach, including Frege's notorious paradox of the concept horse, are addressed. It is argued that the approach results in a modest and sober deflationary understanding of ontological commitments.

    Evidence: A Guide for the Uncertain

    Assume that it is your evidence that determines what opinions you should have. I argue that since you should take peer disagreement seriously, evidence must have two features. (1) It must sometimes warrant being modest: uncertain what your evidence warrants, and (thus) uncertain whether you’re rational. (2) But it must always warrant being guided: disposed to treat your evidence as a guide. Surprisingly, it is very difficult to vindicate both (1) and (2). But diagnosing why this is so leads to a proposal—Trust—that is weak enough to allow modesty but strong enough to yield many guiding features. In fact, I claim that Trust is the Goldilocks principle—for it is necessary and sufficient to vindicate the claim that you should always prefer to use free evidence. Upshot: Trust lays the foundations for a theory of disagreement and, more generally, an epistemology that permits self-doubt—a modest epistemology.

    Degrees of belief, expected and actual

    A framework of degrees of belief, or credences, is often advocated to model our uncertainty about how things are or will turn out. It has also been employed in relation to the kind of uncertainty or indefiniteness that arises due to vagueness, such as when we consider “a is F” in a case where a is borderline F. How should we understand degrees of belief when we take into account both these phenomena? Can the right kind of theory of the semantics of vagueness help us answer this? Nicholas J.J. Smith defends a unified account, according to which “degree of belief is expected truth-value”; this builds on his Degree Theory of vagueness, which offers an account of the semantics and logic of vagueness in terms of degrees of truth. I argue that his account fails. Degree theories of vagueness do not help us understand degrees of belief and, I argue, we shouldn’t expect a theory of vagueness to yield a detailed uniform story about this. The route from the semantics to psychological states needn’t be straightforward or uniform even before we attempt to combine vagueness with probabilistic uncertainty.
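    The proposal under discussion, “degree of belief is expected truth-value”, can be stated compactly: where a degree theory of vagueness assigns the vague sentence a truth value in [0, 1] at each epistemically possible world, the credence is the probability-weighted average of those truth values. A minimal sketch, with purely illustrative numbers:

    ```python
    # Smith's proposal as stated in the abstract: degree of belief in
    # "a is F" = expected truth-value. The worlds, probabilities, and
    # truth values below are illustrative, not from the paper.

    # Epistemically possible worlds with the agent's probabilities:
    worlds = {"w1": 0.2, "w2": 0.5, "w3": 0.3}

    # Degree-theoretic truth value of "a is F" at each world
    # (a is clearly F in w1, borderline in w2, clearly not-F in w3):
    truth_value = {"w1": 1.0, "w2": 0.4, "w3": 0.0}

    # Degree of belief = expectation of the truth value over worlds:
    credence = sum(p * truth_value[w] for w, p in worlds.items())
    print(credence)  # 0.2*1.0 + 0.5*0.4 + 0.3*0.0 = 0.4
    ```

    The abstract's objection targets exactly this uniform route from degree-theoretic semantics to a single psychological quantity.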

    Comparing Rates of Return in Pay-As-You-Go and Fully Funded Pension Systems: Concepts, Empirical Results, Social-Policy Consequences

    Demographic change has triggered a more fundamental debate about old-age security, namely over the choice of an efficient method of financing retirement provision. At the centre of this debate stands, again and again, the comparison of rates of return between the pay-as-you-go and the fully funded system. That comparison is the subject of this paper. It is by no means as simple as is often suggested, since insurance and risk aspects, and above all the transition problem, must be taken into account. The present contribution sets out the economic-theoretical background with the most important relevant concepts, and presents empirical estimates of the current, and simulation results on the future, development of the relevant rates of return. We close with the social-policy consequences for a reformed system of old-age provision.

    Reciprocity as a foundation of financial economics

    This paper argues that the fundamental theorem of contemporary financial mathematics rests on the ethical concept of ‘reciprocity’. The argument is based on identifying an equivalence between the contemporary, and ostensibly ‘value neutral’, Fundamental Theorem of Asset Pricing and the theories of mathematical probability that emerged in the seventeenth century in the context of the ethical assessment of commercial contracts within a framework of Aristotelian ethics. This observation, the main claim of the paper, is justified on the basis of results from the Ultimatum Game and is analysed within a framework of Pragmatic philosophy. The analysis leads to the explanatory hypothesis that markets are centres of communicative action with reciprocity as a rule of discourse. The purpose of the paper is to reorientate financial economics to emphasise the objectives of cooperation and social cohesion, and to this end we offer specific policy advice.
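    The Fundamental Theorem of Asset Pricing invoked here says, in its simplest one-period form, that absence of arbitrage is equivalent to the existence of a risk-neutral (martingale) probability under which claims are priced as discounted expectations. A minimal binomial sketch of that statement, with illustrative numbers not drawn from the paper:

    ```python
    # One-period binomial sketch of the Fundamental Theorem of Asset Pricing:
    # no arbitrage <=> there exists a "risk-neutral" probability q under which
    # the discounted stock price is a martingale, and claims are priced as
    # expectations under q. All numbers are illustrative.

    s0, up, down, r = 100.0, 1.2, 0.8, 0.05  # spot, up/down factors, rate

    # Martingale condition: s0 = (q*s0*up + (1-q)*s0*down) / (1+r)
    q = (1 + r - down) / (up - down)          # risk-neutral probability
    assert 0 < q < 1                          # no arbitrage: down < 1+r < up

    # Price of a call struck at 100 as a discounted expectation under q:
    payoff_up = max(s0 * up - 100, 0)
    payoff_down = max(s0 * down - 100, 0)
    call = (q * payoff_up + (1 - q) * payoff_down) / (1 + r)
    print(q, call)
    ```

    Note that q is fixed by the martingale condition alone, not by anyone's subjective probability of the up move; this is the sense in which the theorem is “ostensibly value neutral”, which is the framing the paper interrogates.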

    Priority for the Worse Off and the Social Cost of Carbon

    The social cost of carbon (SCC) is a monetary measure of the harms from carbon emission. Specifically, it is the reduction in current consumption that produces a loss in social welfare equivalent to that caused by the emission of a ton of CO2. The standard approach is to calculate the SCC using a discounted-utilitarian social welfare function (SWF)—one that simply adds up the well-being numbers (utilities) of individuals, as discounted by a weighting factor that decreases with time. The discounted-utilitarian SWF has been criticized both for ignoring the distribution of well-being and for including an arbitrary preference for earlier generations. Here, we use a prioritarian SWF, with no time-discount factor, to calculate the SCC in the integrated assessment model RICE. Prioritarianism is a well-developed concept in ethics and theoretical welfare economics, but has thus far been little used in climate scholarship. The core idea is to give greater weight to well-being changes affecting worse off individuals. We find substantial differences between the discounted-utilitarian and non-discounted prioritarian SCC.
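    The contrast between the two social welfare functions described above can be sketched in a few lines: the discounted-utilitarian SWF sums utilities weighted by a time-discount factor, while a prioritarian SWF applies a strictly concave transform to each utility (here an Atkinson transform, a common choice) with no discounting, so that gains to the worse off count for more. The utilities and parameters below are illustrative, not RICE outputs.

    ```python
    # Discounted-utilitarian vs. prioritarian social welfare, in miniature.
    # Utilities and parameters are illustrative, not taken from RICE.

    utilities = [1.0, 4.0, 9.0]   # well-being of three generations/groups
    beta, gamma = 0.97, 2.0       # time-discount factor; priority parameter

    def discounted_utilitarian(us, beta):
        # Sum of utilities, discounted by time index t.
        return sum(beta**t * u for t, u in enumerate(us))

    def prioritarian(us, gamma):
        # Atkinson transform g(u) = u**(1-gamma)/(1-gamma), gamma != 1:
        # strictly concave, so changes to the worse off weigh more.
        return sum(u**(1 - gamma) / (1 - gamma) for u in us)

    print(discounted_utilitarian(utilities, beta))
    print(prioritarian(utilities, gamma))

    # A transfer from the best off to the worst off raises prioritarian welfare:
    assert prioritarian([2.0, 4.0, 8.0], gamma) > prioritarian(utilities, gamma)
    ```

    The SCC computed under each SWF is then the current-consumption reduction producing an equivalent welfare loss to a marginal ton of CO2, which is why the choice of SWF changes the number substantially.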